2019 iT 邦幫忙鐵人賽, DAY 21

Series: Building an on-premises Angular + ASP.NET Core DevOps environment, part 21

day21_k8s07_Pod Preset,RBAC,helm,ingress

Preface

Having survived the all-important storage chapters, today covers a few other topics I don't fully understand myself either, so bear with me~

Pod Preset

A PodPreset injects resources such as secrets, ConfigMaps, volumes, volume mounts, and environment variables into Pods at creation time.

Use case

If you need to deploy a pile of applications,
you can define the data you want injected in a yaml file first,
and it only takes effect on Pods that match (via selector and matchLabels).

Example 1

Note the keyword:

  • label role: frontend

The yaml files are as follows:

  • podpreset/preset.yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels: # the key is this line: matchLabels
      role: frontend # applies only to Pods with matching labels
  env:
    - name: DB_PORT
      value: "6379"
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
  • 1. Create the PodPreset object
$ kubectl create -f https://k8s.io/examples/podpreset/preset.yaml
# to delete it:
$ kubectl delete podpreset allow-database
podpreset "allow-database" deleted
  • podpreset/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend # this label is the key
spec:
  containers:
    - name: website
      image: nginx
      ports:
        - containerPort: 80
  • 2. Create the pod
$ kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
  • 3. Now look at pod website's yaml again; the extras have been injected
$ kubectl get pod website -o yaml
  • podpreset/merged.yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend
  annotations:
    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
spec:
  containers:
    - name: website
      image: nginx
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
      ports:
        - containerPort: 80
      env:
        - name: DB_PORT
          value: "6379"
  volumes:
    - name: cache-volume
      emptyDir: {}
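
To confirm the preset really landed, you can read the injected environment variable straight out of the running pod; a quick check (this assumes the PodPreset admission controller and API are enabled in your cluster):

$ kubectl exec website -- printenv DB_PORT
6379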

There are plenty of other examples.

I couldn't follow them all, so I haven't written them up;
brave readers, go take on the challenge yourselves:
https://kubernetes.io/docs/tasks/inject-data-application/podpreset/

===

RBAC

RBAC is really important!!
If you already have a working cluster (e.g. built with minikube or kubeadm),
be sure to come back and fill in RBAC, even though I don't fully understand it myself~

I recommend this article by Kyle Bai, and the further reading it links to.

k8s has several authorization modes to choose from:

  • Node: authorizes API requests made by kubelets
  • ABAC: attribute-based access control
    ABAC controls permissions through policies that combine a set of attributes
    ABAC permission control cannot be configured at a very fine granularity
  • RBAC: role-based access control
    permissions are granted according to roles
    policies can be configured dynamically
  • Webhook: sends an authorization request to an external REST interface (e.g. an authorization server)
    a good fit if you write your own authorization server
    the authorization request payload is in JSON format
    the authorization server then replies with access granted or access denied

Specifying the authorization mode

The authorization mode to use
must be specified when the API server starts:
--authorization-mode=RBAC

kops and kubeadm both default to RBAC.

With minikube, you can pass it at start time:

$ minikube start --extra-config=apiserver.Authorization.Mode=RBAC
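
On a kubeadm cluster you can verify the active modes in the API server's static pod manifest; a quick sketch (the path shown is the kubeadm default, and the exact output may differ by version):

$ grep authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml
    - --authorization-mode=Node,RBAC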

How to configure RBAC

  • Write it in yaml files and apply them to the cluster
    (i.e. add RBAC resources with kubectl)
  • Define a role, then add users/groups to the role
  • Roles can be limited to a single namespace
  • Roles can also apply across all namespaces
    Role (single namespace) vs. ClusterRole (cluster-wide)
    RoleBinding (single namespace) vs. ClusterRoleBinding (cluster-wide)

A simple example

  • 1. Define a Role and apply it to the cluster
kind: Role
# kind: ClusterRole # to apply across all namespaces, use ClusterRole instead
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default # the grant is scoped to the default namespace only
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "secrets"] # resources that may be accessed: pods, secrets
# (deployments would also work, but they live in the "apps" apiGroup)
  verbs: ["get", "watch", "list"] # allowed verbs (roughly, read-only access)
# other verbs: create, update, patch, delete
  • 2. Grant it to user bob
kind: RoleBinding
# kind: ClusterRoleBinding # to bind a ClusterRole, change this to ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: read-pods
subjects: # the subjects receiving the role; here, user bob
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef: # the role to bind
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
  • Usage
    (in practice you may also need to set up certificates)
$ kubectl create -f role.yaml
$ kubectl delete -f role.yaml # if you want to practice again
$ kubectl config use-context bob # switch context to bob
$ kubectl get pods -n kube-system
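
For that context switch to work, bob's credentials and context have to exist in your kubeconfig first. A minimal sketch, assuming you already have a client certificate bob.crt/bob.key signed by the cluster CA (the file names and the cluster name my-cluster are made up for illustration):

$ kubectl config set-credentials bob --client-certificate=bob.crt --client-key=bob.key
$ kubectl config set-context bob --cluster=my-cluster --user=bob --namespace=default
# as an admin, you can also dry-run bob's permissions:
$ kubectl auth can-i list pods --as bob -n default
yes
$ kubectl auth can-i delete pods --as bob -n default
no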

===

Helm (package manager)

Helm comes up a lot; like apt, yum, and npm, Helm is a package manager, but for kubernetes.
Helm is maintained under the CNCF, The Cloud Native Computing Foundation.

SIG-Apps is a Special Interest Group for deploying and operating apps in Kubernetes. They meet each week to demo and discuss tools and projects.

1. Install helm

https://docs.helm.sh/using_helm/#installing-helm

  • macOS
$ brew install kubernetes-helm
  • Linux
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh

2. If you use RBAC, add a ServiceAccount and RBAC rules

Create a service account tiller with the cluster-admin role:

  • rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # this role already exists, so no need to create it; just point to it with roleRef
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

$ kubectl create -f rbac-config.yaml # create service account "tiller"
serviceaccount "tiller" created  
clusterrolebinding "tiller" created
$ helm init --service-account tiller

Create a namespace

$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created

Define a Role that can manage all resources in that namespace

  • role-tiller.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world # can manage all resources inside the namespace "tiller-world"
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
  • rolebinding-tiller.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
# create the Role
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
# role binding
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created

3. helm init

# with the service account and namespace in place, we can finally initialize
$ helm init --service-account tiller --tiller-namespace tiller-world
$HELM_HOME has been configured at /Users/awesome-user/.helm.

Tiller (the Helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

4. Install charts

# install nginx into the specified namespace
$ helm install nginx --tiller-namespace tiller-world --namespace tiller-world
NAME:   wayfaring-yak
LAST DEPLOYED: Mon Aug  7 16:00:16 2017
NAMESPACE: tiller-world
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME                  READY  STATUS             RESTARTS  AGE
wayfaring-yak-alpine  0/1    ContainerCreating  0         0s

Common helm commands

$ helm init # initialize: installs Tiller into the cluster
$ helm reset # remove Tiller
$ helm install # install a chart
$ helm search # search for charts
$ helm list # list installed releases
$ helm upgrade # upgrade a release
$ helm rollback # roll back to a previous revision
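
Strung together, a typical session might look like this; a sketch (the release name my-release is illustrative, and search results depend on your configured repos):

$ helm search mysql                     # find charts
$ helm install --name my-release stable/mysql
$ helm list                             # shows my-release and its revision
$ helm upgrade my-release stable/mysql  # upgrade to a newer chart version
$ helm rollback my-release 1            # back to revision 1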

charts

Charts are the packaging format Helm uses.
A chart describes kubernetes resources and is itself a collection of files.
One chart can deploy one app (for example a database: the mysql chart).

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favoriteDrink }}
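
The {{ .Values.favoriteDrink }} placeholder is filled in from the chart's values.yaml and can be overridden at install time; a small sketch (the mychart path and the values shown are illustrative):

# values.yaml (chart defaults)
favoriteDrink: coffee

# override the default when installing
$ helm install ./mychart --set favoriteDrink=tea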

Other scenarios

Installing tiller into a different namespace

# first, create a new namespace "myorg-system"
$ kubectl create namespace myorg-system
namespace "myorg-system" created
# install tiller into that namespace
$ kubectl create serviceaccount tiller --namespace myorg-system
serviceaccount "tiller" created
# let tiller manage all resources in namespace myorg-users
# create a role
# role-tiller.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: myorg-users
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]

$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
# do the role binding
# rolebinding-tiller.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: myorg-users
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io

$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
# the steps are almost identical

Hardening security

Helm (cert) ----TLS/SSL---- Tiller (cert)

For the complete steps (e.g. the full certificate-generation commands), see the official docs:
https://docs.helm.sh/using_helm/#using-ssl-between-helm-and-tiller

In short, feed certificates to both the helm client and the tiller server, and you get an encrypted port.
When you hit trouble... google to the rescue.

Tiller server-side setup

$ helm init --dry-run --debug --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
# --dry-run # a test mode of sorts; shows extra information
# --dry-run and --debug can be omitted:
$ helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem

# check whether your tiller deployed successfully
$ kubectl -n kube-system get deployment
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
... other stuff
tiller-deploy   1         1         1            1           2m

$ kubectl get pods -n kube-system

Next, the helm client-side setup

# copy the certificates into place (details omitted)
$ cp ca.cert.pem $(helm home)/ca.pem
$ cp helm.cert.pem $(helm home)/cert.pem
$ cp helm.key.pem $(helm home)/key.pem
# feed the certificates to your helm
$ helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
# enable TLS
$ helm ls --tls

Helm Charts

The structure of a helm chart:

mychart/
  Chart.yaml            # chart metadata:
                        #   apiVersion: v1
                        #   appVersion: "1.0"
                        #   description: A helm chart for k8s cluster
                        #   version: 0.1.0
  values.yaml           # default values (key: value pairs)
  templates/
    deployment.yaml
    service.yaml

Example

# add a helm repository
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install bitnami/node
# you can also point it at a repository directly
$ helm install --name my-release \
  --set repository=https://github.com/jbianquetti-nami/simple-node-app.git,replicas=2 \
    bitnami/node

$ helm install --name my-release bitnami/node # naming the release is recommended
$ helm delete my-release # uninstall the chart

helm makes installing an ingress controller very easy

# install
$ helm install stable/nginx-ingress
# when deploying the node helm chart, enable ingress
$ helm install --name my-release bitnami/node --set ingress.enabled=true,ingress.host=example.com,service.type=ClusterIP

===

ingress

Today's references:

github (AWS, GCE, and Azure are all supported)

https://github.com/kubernetes/ingress-nginx

Tutorial

https://kubernetes.github.io/ingress-nginx/deploy/#minikube

Wow, ingress is extremely important too,
so no slacking off on this part.

Since Kubernetes 1.1, ingress handles the cluster's inbound connections (incoming traffic).

What is ingress?

Borrowing the official docs' illustration:

    # without ingress:
    internet # how do clients reach the service?
        |
  ------------ # how do we expose it? what else can we do?
  [ Services ] # inside the cluster, services can reach each other by ip

    # with ingress (requested by POSTing an Ingress resource to the API server):
    internet
        |
   [ Ingress Controller ] # a LoadBalancer specialized for HTTP(S)-based applications
   [ Ingress ] # routes requests to the right pod & service by RULE; SSL/TLS termination (via a secret)
     |     |   # the controller itself runs as pods in the cluster
   --|-----|--
   [ Services ]
  • ingress currently still seems to be in beta.
    Outside of GCE/Google Kubernetes Engine, you have to run an ingress controller pod yourself.

  • ingress and the ingress controller appear to be two different things.

  • The ingress controller can act as an HTTP(S)-based load balancer, so if you are on an
    AWS LoadBalancer, you can hand all HTTP(S)-based traffic to the ingress controller and cut costs.

You don't have to use ingress

Similar alternatives to ingress:

  • Service.Type=LoadBalancer
  • Service.Type=NodePort # this one has been around longer (sketched below)
  • Port Proxy
  • Service load balancer
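
For instance, a NodePort service just opens the same port on every node and forwards it to matching pods; a minimal sketch (the selector reuses the website pod from the PodPreset example, and 30080 is an arbitrary port in the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: website
  ports:
  - port: 80        # service port inside the cluster
    targetPort: 80  # container port
    nodePort: 30080 # exposed on every node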

Let's start with the simplest ingress.

Example 1: an ingress for a single service

An ingress with no rules sends all traffic to a single service.

  • ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
# create an ingress object
$ kubectl create -f ingress.yaml
# inspect the ingress object
$ kubectl get ingress test-ingress
NAME           HOSTS     ADDRESS           PORTS     AGE
test-ingress   *         107.178.254.228   80        59s
# ingress object: test-ingress
# it has no RULEs at all
# one service: testsvc:80 (the ingress controller allocated ip 107.178.254.228)
# all traffic hitting 107.178.254.228:80 is routed to that service

Example 2: with rules, routing to the right service by path

Suppose the domain name is foo.bar.com (IP: 178.91.123.132)
and two services both listen on port 80. How do we configure ingress to route requests to the right one?

foo.bar.com -> 178.91.123.132 -> / foo    s1:80
                                 / bar    s2:80

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths: # the Ingress Controller routes by path to the right service
      - path: /foo # http://foo.bar.com:80/foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar # http://foo.bar.com:80/bar
        backend:
          serviceName: s2
          servicePort: 80
# the ingress object test ends up looking like this:
$ kubectl describe ingress test
Name:             test
Namespace:        default
Address:          178.91.123.132 # the external ip
Default backend:  default-http-backend:80 (10.8.2.3:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  foo.bar.com
               /foo   s1:80 (10.8.0.90:80) # internal virtual ips
               /bar   s2:80 (10.8.0.91:80)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:
  Type     Reason  Age                From                     Message
  ----     ------  ----               ----                     -------
  Normal   ADD     22s                loadbalancer-controller  default/test

Example 3: name-based virtual hosting, one IP serving n host names

This one is also very practical. The scenario: you have a single public IP but a pile of websites.

foo.bar.com --|                 |-> foo.bar.com s1:80 # e.g. an auction site
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80 # e.g. a shopping-mall site

Requests in this form are routed based on the Host header:
Host = uri-host [ ":" port ] # https://tools.ietf.org/html/rfc7230#section-5.4
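
You can watch this routing happen by pinning the Host header yourself; a quick sketch against the example IP above:

$ curl -H "Host: foo.bar.com" http://178.91.123.132/ # answered by s1
$ curl -H "Host: bar.foo.com" http://178.91.123.132/ # answered by s2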

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80

Example 4: TLS

Serve the site over https with a certificate; for the best odds of success, use nginx.
Ingress currently supports only a single TLS port, 443.

  • Suppose the certificate is already stored in a secret object
apiVersion: v1
data: # required for TLS
  tls.crt: base64 encoded cert # the certificate
  tls.key: base64 encoded key # the private key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
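
If the secret doesn't exist yet, kubectl can build it straight from the certificate files; a sketch (tls.crt and tls.key are assumed to be your PEM files; the resulting secret is typed kubernetes.io/tls rather than Opaque, but carries the same two keys):

$ kubectl create secret tls testsecret --cert=tls.crt --key=tls.key
secret "testsecret" created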

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret # hang the certificate under the Ingress
  backend:
    serviceName: s1
    servicePort: 80

This is only the start.
For how to configure the web server from here, please refer to:

(strongly recommended!!)

And finally, to wrap up the show…

How do you modify an existing ingress object?

# method 1:
$ kubectl edit ingress test
# method 2:
$ kubectl replace -f modified.yaml
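
A third option, if you keep the yaml in version control, is apply; a sketch:

$ kubectl apply -f modified.yaml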

立即登入留言